Chain-of-thought prompting
Kojima2022large demonstrates that simply appending “Let’s think step by step” to the prompt makes LLMs reason through problems more effectively. This is a zero-shot form of chain-of-thought (CoT) prompting. Shaikh2022second argues, however, that the same trick also makes models more prone to producing toxic and biased content. ^04ff06
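A minimal sketch of how the zero-shot CoT trigger is used, assuming a plain string prompt (the model call itself is omitted; `build_cot_prompt` is an illustrative helper, not from the paper):

```python
def build_cot_prompt(question: str) -> str:
    # Zero-shot CoT (Kojima2022large): append the trigger phrase so the
    # model generates intermediate reasoning steps before its final answer.
    return f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("If I have 3 apples and buy 2 more, how many do I have?")
print(prompt)
```

The answer is then extracted from the model's completion, often with a second prompt such as “Therefore, the answer is”, as done in the paper.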
Deng2024from studies explicit vs. implicit CoT, i.e., whether the reasoning steps must appear in the output text or can be internalized by the model.